Multitask Learning via Mixture of Linear Subspaces
Abstract
We propose a probabilistic generative model for multitask learning that exploits the cluster structure of the task parameters and additionally imposes a low-rank constraint on the task parameters within each cluster. This leads to sharing of statistical strength across tasks at two levels: (1) via the cluster assumption, and (2) via a subspace assumption within each cluster. Our work brings together the benefits of both aspects of task relatedness, each of which has been addressed only individually in prior work. We place a mixture of linear subspaces model on the latent task parameters, which captures both aspects simultaneously. Furthermore, the mixture-of-subspaces assumption can model task parameters that live on a non-linear manifold rather than a single linear subspace, a restriction of earlier work on multitask learning based on the linear subspace assumption.
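To make the two-level generative story concrete, below is a minimal NumPy sketch of sampling task parameters from a mixture of linear subspaces. All sizes and names (K, D, d, pi, mu, W) are illustrative assumptions, not values from the paper, and the paper's inference procedure is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): K clusters of tasks,
# D-dimensional task parameters, d-dimensional subspace per cluster.
K, D, d, n_tasks = 3, 20, 2, 30

pi = rng.dirichlet(np.ones(K))   # mixing weights over clusters
mu = rng.normal(size=(K, D))     # cluster means in parameter space
W = rng.normal(size=(K, D, d))   # rank-d basis spanning each cluster's subspace

def sample_task_parameter(noise_std=0.05):
    """Draw one task's weight vector theta from the mixture of subspaces."""
    z = rng.choice(K, p=pi)      # level 1: cluster assignment shared across tasks
    s = rng.normal(size=d)       # level 2: low-dim coordinates within the cluster
    theta = mu[z] + W[z] @ s + noise_std * rng.normal(size=D)
    return z, theta

tasks = [sample_task_parameter() for _ in range(n_tasks)]
# Each task's observations would then follow, e.g., y = x @ theta + eps.
```

Because each cluster contributes its own low-rank affine patch, the union of patches can approximate task parameters lying near a non-linear manifold, which a single global linear subspace cannot.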
Similar Resources
Multitask Sparsity via Maximum Entropy Discrimination
A multitask learning framework is developed for discriminative classification and regression where multiple large-margin linear classifiers are estimated for different prediction problems. These classifiers operate in a common input space but are coupled as they recover an unknown shared representation. A maximum entropy discrimination (MED) framework is used to derive the multitask algorithm w...
Generalized Dictionary for Multitask Learning with Boosting
While multitask learning has been extensively studied, most existing methods rely on linear models (e.g. linear regression, logistic regression), which may fail in dealing with more general (nonlinear) problems. In this paper, we present a new approach that combines dictionary learning with gradient boosting to achieve multitask learning with general (nonlinear) basis functions. Specifically, f...
Feature Hashing for Large Scale Multitask Learning
Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case — multit...
Infinite Predictor Subspace Models for Multitask Learning
Given several related learning tasks, we propose a nonparametric Bayesian model that captures task relatedness by assuming that the task parameters (i.e., predictors) share a latent subspace. More specifically, the intrinsic dimensionality of the task subspace is not assumed to be known a priori. We use an infinite latent feature model to automatically infer this number (depending on and limite...
Flexible Modeling of Latent Task Structures in Multitask Learning
Multitask learning algorithms are typically designed assuming some fixed, a priori known latent structure shared by all the tasks. However, it is usually unclear what type of latent task structure is the most appropriate for a given multitask learning problem. Ideally, the “right” latent task structure should be learned in a data-driven manner. We present a flexible, nonparametric Bayesian mode...
Publication year: 2010